DOA estimation using multiple measurement vector model with sparse solutions in linear array scenarios
A novel algorithm based on the sparse multiple measurement vector (MMV) model is presented for direction-of-arrival (DOA) estimation of far-field narrowband sources. The algorithm exploits singular value decomposition (SVD) denoising to enhance the reconstruction process. The multiple-snapshot nature of the MMV model enables the simultaneous processing of several data snapshots, yielding greater accuracy in the DOA estimates. The DOA problem is addressed in both uniform linear array (ULA) and nonuniform linear array (NLA) scenarios. The proposed method demonstrates superior performance in terms of root mean square error and running time when compared with conventional compressed sensing methods such as simultaneous orthogonal matching pursuit (S-OMP) and ℓ2,1 minimization, as well as root-MUSIC.
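As a rough illustration of the pipeline described above (not the paper's exact algorithm), the sketch below builds a ULA steering dictionary, denoises the snapshots by projecting onto the dominant right singular vectors, and recovers the DOAs with S-OMP. The array size, angle grid, source angles, and noise level are all illustrative assumptions.

```python
import numpy as np

def steering_matrix(n_sensors, grid_deg, spacing=0.5):
    # ULA steering vectors for a grid of candidate angles
    # (half-wavelength element spacing assumed)
    k = np.arange(n_sensors)[:, None]
    return np.exp(2j * np.pi * spacing * k * np.sin(np.deg2rad(grid_deg))[None, :])

def somp(A, Y, n_sources):
    # Simultaneous OMP: greedily pick the atom most correlated
    # with the residual across all columns of Y
    residual, support = Y.copy(), []
    for _ in range(n_sources):
        corr = np.sum(np.abs(A.conj().T @ residual), axis=1)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X
    return sorted(support)

rng = np.random.default_rng(0)
n_sensors, n_snapshots, n_src = 12, 200, 2
grid = np.arange(-90.0, 90.5, 1.0)          # 1-degree DOA grid
A = steering_matrix(n_sensors, grid)
true_idx = [int(np.argmin(np.abs(grid - t))) for t in (-20.0, 35.0)]
S = rng.standard_normal((n_src, n_snapshots)) + 1j * rng.standard_normal((n_src, n_snapshots))
noise = 0.05 * (rng.standard_normal((n_sensors, n_snapshots))
                + 1j * rng.standard_normal((n_sensors, n_snapshots)))
Y = A[:, true_idx] @ S + noise

# SVD denoising: keep only the signal subspace (top n_src right singular vectors)
_, _, Vh = np.linalg.svd(Y, full_matrices=False)
Y_denoised = Y @ Vh.conj().T[:, :n_src]

doas = grid[somp(A, Y_denoised, n_src)]
```

With a well-conditioned scenario like this one, the recovered grid angles land on or next to the true DOAs; the SVD step mainly pays off at lower SNR, where it suppresses the noise subspace before the greedy search.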
Reconstruction of Demand Shocks in Input-Output Networks
Input-Output analysis describes the dependence of production, demand, and trade between sectors and regions, and allows one to understand the propagation of economic shocks through economic networks. A central challenge in practical applications is the availability of data. Observations may be limited to the impact of the shocks in a few sectors, but a complete picture of the origin and impacts would be highly desirable to guide political countermeasures. In this article we demonstrate that a shock in the final demand in a few sectors can be fully reconstructed from limited observations of production changes. We adapt three algorithms from sparse signal recovery and evaluate their performance and their robustness to observation uncertainties.
Comment: 10 pages, 4 figures, conference proceeding for CompleNet 202
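The adaptation can be sketched as follows. Under the Leontief model a final-demand shock Δd produces the output change Δx = (I − A)⁻¹ Δd; if Δd is sparse and only a few entries of Δx are observed, a sparse-recovery algorithm can try to reconstruct Δd. The sketch below uses plain OMP as a stand-in for the paper's three algorithms, on a synthetic coefficient matrix and shock; whether recovery is exact depends on the network structure and which sectors are observed.

```python
import numpy as np

def omp(Phi, y, sparsity):
    # Orthogonal Matching Pursuit: greedy recovery of a sparse x from y = Phi x
    residual, support = y.copy(), []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n = 50                                            # number of sectors (synthetic)
A = 0.8 * rng.random((n, n)) / n                  # toy technical-coefficient matrix
L = np.linalg.inv(np.eye(n) - A)                  # Leontief inverse
shock = np.zeros(n)
shock[[3, 17, 40]] = [-1.0, 0.5, -0.3]            # demand shock in three sectors
dx = L @ shock                                    # resulting production changes
observed = rng.choice(n, size=20, replace=False)  # production observed in 20 sectors only
shock_hat = omp(L[observed, :], dx[observed], sparsity=3)
```

The rows of the Leontief inverse restricted to the observed sectors play the role of the sensing matrix; because its columns can be strongly correlated, the paper's evaluation of several recovery algorithms and of robustness to observation noise is exactly the interesting question.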
k is the Magic Number -- Inferring the Number of Clusters Through Nonparametric Concentration Inequalities
Most convex and nonconvex clustering algorithms come with one crucial
parameter: the k in k-means. To this day, there is not one generally
accepted way to accurately determine this parameter. Popular methods are simple
yet theoretically unfounded, such as searching for an elbow in the curve of a
given cost measure. In contrast, statistically founded methods often make
strict assumptions over the data distribution or come with their own
optimization scheme for the clustering objective. This limits either the set of
applicable datasets or clustering algorithms. In this paper, we strive to
determine the number of clusters by answering a simple question: given two
clusters, is it likely that they jointly stem from a single distribution? To
this end, we propose a bound on the probability that two clusters originate
from the distribution of the unified cluster, specified only by the sample mean
and variance. Our method is applicable as a simple wrapper to the result of any
clustering method minimizing the k-means objective, which includes
Gaussian mixtures and Spectral Clustering. We focus in our experimental
evaluation on an application for nonconvex clustering and demonstrate the
suitability of our theoretical results. Our \textsc{SpecialK} clustering
algorithm automatically determines the appropriate value for k, without
requiring any data transformation or projection, and without assumptions on the
data distribution. Additionally, it is capable of deciding that the data consists of only a single cluster, which many existing algorithms cannot do.
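The merge question can be illustrated with a much simpler stand-in for the paper's bound: a Chebyshev-type inequality on the gap between two clusters' sample means, computed from sample means and variances only. This is not the SpecialK bound itself, just a sketch of how a wrapper of this kind decides whether two clusters plausibly stem from a single distribution.

```python
import numpy as np

def single_distribution_bound(x1, x2):
    # Chebyshev-type bound on observing this gap between sample means if both
    # clusters were drawn from one common distribution (a simplified stand-in
    # for the paper's nonparametric concentration inequality).
    gap = abs(x1.mean() - x2.mean())
    se2 = x1.var(ddof=1) / len(x1) + x2.var(ddof=1) / len(x2)
    return 1.0 if gap == 0 else min(1.0, se2 / gap ** 2)

rng = np.random.default_rng(0)

# A random split of one blob: the two "clusters" really are one distribution
blob = rng.normal(0.0, 1.0, size=400)
p_same = single_distribution_bound(blob[:200], blob[200:])

# Two genuinely separated clusters
p_diff = single_distribution_bound(rng.normal(0.0, 1.0, size=200),
                                   rng.normal(8.0, 1.0, size=200))
```

A wrapper around any k-means-style clustering would merge a pair (and reduce k) when the bound exceeds a significance level α, and keep the clusters separate when it falls below it.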
Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization
This paper shows that the solutions to various convex minimization problems are \emph{unique} if and only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, basis pursuit denoising model, Lasso model, as well as other models that either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\le\sigma$, where $f$ is a strictly convex function. For these models, this paper proves that, given a solution $x^*$ and defining $I=\supp(x^*)$ and $s=\sign(x^*_I)$, $x^*$ is the unique solution if and only if $A_I$ has full column rank and there exists $y$ such that $A_I^T y=s$ and $|a_i^T y|<1$ for $i\notin I$. This condition is previously known to be sufficient for the basis pursuit model to have a unique solution supported on $I$. Indeed, it is also necessary, and applies to a variety of other models. The paper also discusses ways to recognize unique solutions and verify the uniqueness conditions numerically.
Comment: 6 pages; revised version; submitted
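The numerical verification mentioned at the end can be sketched directly from the stated condition: check that A_I has full column rank, and look for a dual certificate y with A_I^T y = s and |a_i^T y| < 1 off the support. The sketch below tries only the minimum-norm candidate for y, so a failed check is inconclusive (a linear program would be needed to rule out every certificate); the random instance is illustrative.

```python
import numpy as np

def certifies_uniqueness(A, x, tol=1e-10):
    # Check the stated condition: A_I full column rank, plus a vector y with
    # A_I^T y = sign(x_I) and |a_i^T y| < 1 for i outside I. Only the
    # minimum-norm candidate y is tried, so False means "not certified",
    # not "provably non-unique".
    I = np.flatnonzero(np.abs(x) > tol)
    A_I = A[:, I]
    if np.linalg.matrix_rank(A_I) < len(I):
        return False
    s = np.sign(x[I])
    y = A_I @ np.linalg.solve(A_I.T @ A_I, s)   # min-norm y with A_I^T y = s
    off = np.setdiff1d(np.arange(A.shape[1]), I)
    return bool(np.max(np.abs(A[:, off].T @ y)) < 1.0)

rng = np.random.default_rng(0)
m, n, k = 100, 150, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
x_star = np.zeros(n)
x_star[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
# b = A @ x_star would be the corresponding basis pursuit data
```

For a well-conditioned random matrix and a very sparse x*, the minimum-norm candidate typically succeeds, which is why this cheap check is a reasonable first pass before solving a feasibility LP.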
Learning Partially Shared Dictionaries for Domain Adaptation
Real-world applicability of many computer vision solutions is constrained by the mismatch between the training and test domains. This mismatch might arise from factors such as changes in pose, lighting conditions, quality of imaging devices, intra-class variations inherent in object categories, etc. In this work, we present a dictionary learning based approach to tackle the problem of domain mismatch. In our approach, we jointly learn dictionaries for the source and the target domains. The dictionaries are partially shared, i.e., some elements are common across both dictionaries. These shared elements can represent the information which is common across both domains. The dictionaries also have some elements to represent the domain-specific information. Using these dictionaries, we separate the domain-specific information from the information which is common across the domains. We use the latter for training cross-domain classifiers, i.e., we build classifiers that work well on a new target domain while using labeled examples only in the source domain. We conduct cross-domain object recognition experiments on popular benchmark datasets and show improvement over existing state-of-the-art domain adaptation approaches.
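The dictionary structure can be sketched in a few lines: each domain's dictionary concatenates a shared block with a domain-specific block, and the coefficients over the shared block are what a cross-domain classifier would consume. The sketch below uses plain least-squares coding in place of the paper's sparse coding and joint learning; all sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_shared, n_specific = 20, 5, 3

D_shared = rng.standard_normal((dim, n_shared))    # atoms common to both domains
D_source = np.hstack([D_shared, rng.standard_normal((dim, n_specific))])
D_target = np.hstack([D_shared, rng.standard_normal((dim, n_specific))])

def code(D, x):
    # least-squares coding (a stand-in for sparse coding over the dictionary)
    a, *_ = np.linalg.lstsq(D, x, rcond=None)
    return a

# A source-domain sample built mostly from shared atoms, plus noise
coeffs = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
x = D_shared @ coeffs + 0.1 * rng.standard_normal(dim)

a_source = code(D_source, x)
shared_code = a_source[:n_shared]     # domain-invariant part, usable across domains
specific_code = a_source[n_shared:]   # source-specific residue
```

Training a classifier only on `shared_code` is what lets it transfer: the target domain reuses the same shared block, so target samples are coded into the same invariant coordinates.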
Perceptual Compressive Sensing
Compressive sensing (CS) acquires measurements at a sub-Nyquist rate and recovers the scene images from them. Existing CS methods always recover the scene images at the pixel level. This leads to over-smoothed recovered images that lack structural information, especially at a low measurement rate. To overcome this drawback, in this paper we propose perceptual CS to obtain high-level structured recovery. Our task no longer focuses on the pixel level. Instead, we work to achieve a better visual effect. In detail, we employ a perceptual loss, defined at the feature level, to enhance the structural information of the recovered images. Experiments show that our method achieves better visual results with stronger structural information than existing CS methods at the same measurement rate.
Comment: Accepted by The First Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2018). This is a pre-print version (not the final version).
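The idea of a loss defined at the feature level rather than the pixel level can be sketched with a fixed filter bank standing in for a pretrained network's feature extractor (the paper's actual feature network is not specified here). A reconstruction that smooths away edges scores zero against itself but is heavily penalized against a sharp target.

```python
import numpy as np

def feature_map(img, filters):
    # valid-mode correlation with each filter: a toy "feature extractor"
    # standing in for a pretrained network's feature layers
    h, w = img.shape
    fh, fw = filters.shape[1:]
    out = np.zeros((len(filters), h - fh + 1, w - fw + 1))
    for c, f in enumerate(filters):
        for i in range(h - fh + 1):
            for j in range(w - fw + 1):
                out[c, i, j] = np.sum(img[i:i + fh, j:j + fw] * f)
    return out

def perceptual_loss(recon, target, filters):
    # mean squared error in feature space rather than pixel space
    return float(np.mean((feature_map(recon, filters)
                          - feature_map(target, filters)) ** 2))

# Simple edge-sensitive filters (assumed; any fixed bank works for the sketch)
filters = np.array([[[1.0, -1.0], [1.0, -1.0]],    # horizontal gradient
                    [[1.0, 1.0], [-1.0, -1.0]]])   # vertical gradient

target = np.zeros((8, 8))
target[:, 4:] = 1.0                  # a sharp vertical edge
blurred = np.full((8, 8), 0.5)       # a recovery that smoothed the edge away

loss_sharp = perceptual_loss(target, target, filters)
loss_blur = perceptual_loss(blurred, target, filters)
```

In training a CS recovery network, this feature-space term would be added to (or replace) the usual pixel-wise loss, pushing the network to preserve exactly the structure that pixel MSE underweights.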
Minimizing Acquisition Maximizing Inference -- A demonstration on print error detection
Is it possible to detect a feature in an image without ever looking at it?
Images are known to have sparser representation in Wavelets and other similar
transforms. Compressed Sensing is a technique which proposes simultaneous
acquisition and compression of any signal by taking very few random linear
measurements (M). The quality of reconstruction directly relates to M, which
should be above a certain threshold for a reliable recovery. Since these
measurements can non-adaptively reconstruct the signal to a faithful extent
using purely analytical methods like Basis Pursuit, Matching Pursuit, Iterative
thresholding, etc., we can be assured that these compressed samples contain
enough information about any relevant macro-level feature contained in the
(image) signal. Thus if we choose to deliberately acquire an even lower number
of measurements - in order to thwart the possibility of a comprehensible
reconstruction, but high enough to infer whether a relevant feature exists in
an image - we can achieve accurate image classification while preserving its
privacy. Through the print error detection problem, it is demonstrated that such a system can be implemented in practice.
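A minimal version of this idea can be sketched with synthetic 1-D signals: take far fewer random measurements than any recovery guarantee would require, and classify directly in the measurement domain with a nearest-centroid rule. The signal model, defect pattern, and measurement count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 8                      # signal length; m far below the recovery threshold
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

def acquire(defect):
    # we never see x itself, only its m compressed measurements
    x = 0.1 * rng.standard_normal(n)
    if defect:
        x[100:110] += 3.0          # a localized "print error" (synthetic)
    return Phi @ x

train = [(acquire(d), d) for d in [0, 1] * 50]
centroid = {c: np.mean([y for y, d in train if d == c], axis=0) for c in (0, 1)}

def classify(y):
    # nearest-centroid decision entirely in the measurement domain
    return min((np.linalg.norm(y - centroid[c]), c) for c in (0, 1))[1]

test_set = [(acquire(d), d) for d in [0, 1] * 25]
accuracy = np.mean([classify(y) == d for y, d in test_set])
```

Because a random projection preserves the separation between the two class means while discarding most of the information needed to reconstruct the image, the defect is detectable even though the signal itself stays private.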